Tuning Parameter Selection for Penalized Likelihood Estimation of Inverse Covariance Matrix
Abstract
In a Gaussian graphical model, conditional independence between two variables is characterized by a zero in the corresponding entry of the inverse covariance matrix. Penalized maximum likelihood methods using the smoothly clipped absolute deviation (SCAD) penalty (Fan and Li, 2001) and the adaptive LASSO penalty (Zou, 2006) have been proposed in the literature. In this article, we establish that using the Bayesian information criterion (BIC) to select the tuning parameter in penalized likelihood estimation with either type of penalty leads to consistent graphical model selection. We compare the empirical performance of BIC with that of cross-validation and demonstrate, through simulation studies, the advantage of the BIC criterion for tuning parameter selection.
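To make the selection rule concrete, here is a minimal sketch in Python. It is not the authors' implementation: it substitutes scikit-learn's ℓ1-penalized GraphicalLasso for the SCAD and adaptive LASSO penalties, fits the estimator over a grid of tuning parameters, and scores each fit with BIC = -2·loglik + log(n)·df, where df counts the diagonal entries plus the selected edges. The helper names bic_score and select_by_bic and the grid of tuning parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

def bic_score(S, Theta, n):
    # Gaussian log-likelihood up to an additive constant:
    # (n/2) * (log det(Theta) - trace(S @ Theta)).
    loglik = 0.5 * n * (np.linalg.slogdet(Theta)[1] - np.trace(S @ Theta))
    # Degrees of freedom: p diagonal entries plus the selected edges
    # (nonzero off-diagonal entries in the upper triangle).
    df = S.shape[0] + np.count_nonzero(np.triu(Theta, k=1))
    return -2.0 * loglik + np.log(n) * df

def select_by_bic(X, alphas):
    # Fit the penalized estimator at each tuning parameter and keep
    # the fit with the smallest BIC.
    n = X.shape[0]
    S = np.cov(X, rowvar=False, bias=True)  # empirical covariance
    best = None
    for alpha in alphas:
        Theta = GraphicalLasso(alpha=alpha).fit(X).precision_
        score = bic_score(S, Theta, n)
        if best is None or score < best[0]:
            best = (score, alpha, Theta)
    return best  # (bic, alpha, estimated precision matrix)
```

For a data matrix X with n rows, select_by_bic(X, np.logspace(-2, 0, 20)) returns the grid point minimizing BIC; the abstract's claim is that this BIC rule, paired with the SCAD or adaptive LASSO penalty rather than the plain ℓ1 penalty used here, is model-selection consistent.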
Similar Articles
Shrinkage Tuning Parameter Selection in Precision Matrices Estimation
Recent literature provides many computational and modeling approaches for covariance matrix estimation in penalized Gaussian graphical models, but relatively little study has been carried out on the choice of the tuning parameter. This paper tries to fill this gap by focusing on the problem of shrinkage parameter selection when estimating sparse precision matrices using the penalized likelih...
Sparse estimation of a covariance matrix.
We suggest a method for estimating a covariance matrix on the basis of a sample of vectors drawn from a multivariate normal distribution. In particular, we penalize the likelihood with a lasso penalty on the entries of the covariance matrix. This penalty plays two important roles: it reduces the effective number of parameters, which is important even when the dimension of the vectors is smaller...
A path following algorithm for Sparse Pseudo-Likelihood Inverse Covariance Estimation (SPLICE)
Given n observations of a p-dimensional random vector, the covariance matrix and its inverse (the precision matrix) are needed in a wide range of applications. The sample covariance (e.g., its eigenstructure) can misbehave when p is comparable to the sample size n. Regularization is often used to mitigate the problem. In this paper, we propose an ℓ1-penalized pseudo-likelihood estimate for the inverse ...
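The pseudo-likelihood idea above can be illustrated with a simplified nodewise-regression sketch, in the spirit of neighborhood selection rather than the SPLICE path-following algorithm itself; the function name nodewise_lasso_precision and the final symmetrization step are our assumptions. Each variable is regressed on all others with an ℓ1 penalty, and the coefficients and residual variances are assembled into a precision-matrix estimate.

```python
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_lasso_precision(X, lam):
    # Regress each variable on all others with an l1 penalty, then use the
    # standard identities Theta[j, j] = 1 / tau_j^2 and
    # Theta[j, k] = -beta_jk / tau_j^2, where beta_j are the regression
    # coefficients of variable j and tau_j^2 its residual variance.
    n, p = X.shape
    Theta = np.zeros((p, p))
    for j in range(p):
        others = np.delete(np.arange(p), j)
        fit = Lasso(alpha=lam).fit(X[:, others], X[:, j])
        resid = X[:, j] - fit.predict(X[:, others])
        tau2 = resid @ resid / n                 # residual variance
        Theta[j, j] = 1.0 / tau2
        Theta[j, others] = -fit.coef_ / tau2
    return (Theta + Theta.T) / 2.0               # crude symmetrization
```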
Covariance selection and estimation via penalised normal likelihood
We propose a nonparametric method to identify parsimony and to produce a statistically efficient estimator of a large covariance matrix. We reparameterise a covariance matrix through the modified Cholesky decomposition of its inverse or the one-step-ahead predictive representation of the vector of responses and reduce the nonintuitive task of modelling covariance matrices to the familiar task o...
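As a rough illustration of the reparameterisation described above (a minimal sketch assuming a fixed variable ordering; the function name modified_cholesky is ours), the modified Cholesky decomposition writes the precision matrix as Theta = T' D^{-1} T, where T is unit lower triangular with rows built from one-step-ahead regression coefficients and D collects the prediction variances:

```python
import numpy as np

def modified_cholesky(Sigma):
    # Find unit lower-triangular T and diagonal D with T @ Sigma @ T.T == D,
    # so that the precision matrix factors as Theta = T.T @ inv(D) @ T.
    p = Sigma.shape[0]
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Sigma[0, 0]
    for j in range(1, p):
        # Coefficients of regressing variable j on variables 0..j-1.
        phi = np.linalg.solve(Sigma[:j, :j], Sigma[:j, j])
        T[j, :j] = -phi
        # One-step-ahead prediction (residual) variance.
        d[j] = Sigma[j, j] - Sigma[:j, j] @ phi
    return T, np.diag(d)
```

Modelling the unconstrained entries of T and D then stands in for the nonintuitive positive-definiteness constraint on the covariance matrix itself, which is the reduction the abstract alludes to.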
ℓ0 Sparse Inverse Covariance Estimation
Recently, there has been a focus on penalized log-likelihood estimation of sparse inverse covariance (precision) matrices. The penalty is responsible for inducing sparsity, and a very common choice is the convex ℓ1 norm. However, the best estimator performance is not always achieved with this penalty. The most natural sparsity-promoting “norm” is the non-convex ℓ0 penalty, but its lack ...